63 research outputs found

    Climate-informed stochastic hydrological modeling: Incorporating decadal-scale variability using paleo data

    A hierarchical framework for incorporating modes of climate variability into stochastic simulations of hydrological data is developed, termed the climate-informed multi-time scale stochastic (CIMSS) framework. A case study on two catchments in eastern Australia illustrates this framework. To develop an identifiable model characterizing long-term variability for the first level of the hierarchy, paleoclimate proxies and instrumental indices describing the Interdecadal Pacific Oscillation (IPO) and the Pacific Decadal Oscillation (PDO) are analyzed. A new paleo IPO-PDO time series dating back 440 yr is produced, combining seven IPO-PDO paleo sources using an objective smoothing procedure to fit low-pass filters to individual records. The paleo data analysis indicates that wet/dry IPO-PDO states have a broad range of run lengths, with 90% between 3 and 33 yr and a mean of 15 yr. The Markov chain model, previously used to simulate oscillating wet/dry climate states, is found to underestimate the probability of wet/dry periods >5 yr, and is rejected in favor of a gamma distribution for simulating the run lengths of the wet/dry IPO-PDO states. For the second level of the hierarchy, a seasonal rainfall model is conditioned on the simulated IPO-PDO state. The model is able to replicate observed statistics such as seasonal and multiyear accumulated rainfall distributions and interannual autocorrelations. Mean seasonal rainfall in the IPO-PDO dry states is found to be 15%-28% lower than in the wet state at the case study sites. In comparison, an annual lag-one autoregressive model is unable to adequately capture the observed rainfall distribution within separate IPO-PDO states. Copyright © 2011 by the American Geophysical Union.
    Benjamin J. Henley, Mark A. Thyer, George Kuczera and Stewart W. Franks
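The run-length comparison above can be illustrated with a short sketch: gamma-distributed run lengths versus the geometric run lengths implied by a two-state Markov chain. Only the ~15 yr mean run length is taken from the abstract; the gamma shape parameter is an assumed illustrative value, not the fitted parameter from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean run length (~15 yr) from the abstract; gamma shape is assumed.
mean_run = 15.0
shape = 2.0
scale = mean_run / shape

# Gamma-distributed run lengths vs. the geometric run lengths implied by a
# two-state Markov chain with the same mean persistence.
gamma_runs = np.maximum(1, np.round(rng.gamma(shape, scale, 10_000))).astype(int)
geom_runs = rng.geometric(1.0 / mean_run, 10_000)

p_gamma = (gamma_runs > 5).mean()
p_geom = (geom_runs > 5).mean()
print(f"P(run > 5 yr): gamma {p_gamma:.2f}, geometric {p_geom:.2f}")
```

With a matched mean, the geometric model assigns less probability to long (>5 yr) wet/dry spells than the gamma model, which is the direction of bias the abstract reports for the Markov chain.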

    Understanding predictive uncertainty in hydrologic modeling: The challenge of identifying input and structural errors

    Meaningful quantification of data and structural uncertainties in conceptual rainfall-runoff modeling is a major scientific and engineering challenge. This paper focuses on the total predictive uncertainty and its decomposition into input and structural components under different inference scenarios. Several Bayesian inference schemes are investigated, differing in the treatment of rainfall and structural uncertainties, and in the precision of the priors describing rainfall uncertainty. Compared with traditional lumped additive error approaches, the quantification of the total predictive uncertainty in the runoff is improved when rainfall and/or structural errors are characterized explicitly. However, the decomposition of the total uncertainty into individual sources is more challenging. In particular, poor identifiability may arise when the inference scheme represents rainfall and structural errors using separate probabilistic models. The inference becomes ill-posed unless sufficiently precise prior knowledge of data uncertainty is supplied; this ill-posedness can often be detected from the behavior of the Monte Carlo sampling algorithm. Moreover, the priors on the data quality must be sufficiently accurate if the inference is to be reliable and support meaningful uncertainty decomposition. Our findings highlight the inherent limitations of inferring inaccurate hydrologic models using rainfall-runoff data with large unknown errors. Bayesian total error analysis can overcome these problems using independent prior information. The need for deriving independent descriptions of the uncertainties in the input and output data is clearly demonstrated.
    Benjamin Renard, Dmitri Kavetski, George Kuczera, Mark Thyer and Stewart W. Franks
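The variance-decomposition idea in this abstract can be sketched with a toy forward model: propagate multiplicative rainfall (input) errors and additive structural errors separately, then check that their variances sum to the total predictive variance. The model form and error magnitudes below are assumptions for illustration; this is not the paper's Bayesian inference, only the decomposition concept.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hydrologic model": runoff as a nonlinear function of rainfall (assumed).
def model(rain, k=0.7):
    return k * rain**1.2

rain_obs = rng.gamma(2.0, 5.0, 500)                # hypothetical rainfall series
n = 2000                                           # ensemble size

mult = rng.lognormal(0.0, 0.2, (n, 1))             # input error: rainfall multipliers
struct = rng.normal(0.0, 1.0, (n, rain_obs.size))  # structural error: additive noise

runoff_input_only = model(mult * rain_obs)         # input uncertainty propagated
runoff_total = runoff_input_only + struct          # total predictive ensemble

# Because the two error sources are independent, the total predictive
# variance splits (approximately) into input and structural components.
var_total = runoff_total.var(axis=0).mean()
var_input = runoff_input_only.var(axis=0).mean()
var_struct = struct.var()
print(f"total {var_total:.1f} ~= input {var_input:.1f} + structural {var_struct:.1f}")
```

The hard part the paper addresses is the inverse problem: recovering this split from runoff data alone, which is ill-posed without precise priors on the input errors.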

    A limited memory acceleration strategy for MCMC sampling in hierarchical Bayesian calibration of hydrological models

    Hydrological calibration and prediction using conceptual models is affected by forcing/response data uncertainty and structural model error. The Bayesian Total Error Analysis methodology uses a hierarchical representation of individual sources of uncertainty. However, it is shown that standard multiblock "Metropolis-within-Gibbs" Markov chain Monte Carlo (MCMC) samplers commonly used in Bayesian hierarchical inference are exceedingly computationally expensive when applied to hydrologic models, which use recursive numerical solutions of coupled nonlinear differential equations to describe the evolution of catchment states such as soil and groundwater storages. This note develops a "limited-memory" algorithm for accelerating multiblock MCMC sampling from the posterior distributions of such models using low-dimensional jump distributions. The new algorithm exploits the decaying memory of hydrological systems to provide accurate tolerance-based approximations of traditional "full-memory" MCMC methods and is orders of magnitude more efficient than the latter.
    George Kuczera, Dmitri Kavetski, Benjamin Renard and Mark Thyer
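The "decaying memory" that the note exploits can be shown with a toy linear catchment store: perturbing one forcing value (as a block-MCMC proposal would) only influences later states in proportion to a**k, so re-simulating a finite window approximates a full re-run to within a controllable tolerance. The store, its recession constant, and the perturbation below are assumptions for illustration, not the algorithm itself.

```python
import numpy as np

# Toy linear catchment store: S[t] = a*S[t-1] + rain[t], runoff = (1-a)*S[t].
a = 0.8  # assumed recession constant; a perturbation's influence decays as a**k

def simulate(rain, s0=0.0):
    s, out = s0, np.empty(rain.size)
    for t, r in enumerate(rain):
        s = a * s + r
        out[t] = (1 - a) * s
    return out

rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 3.0, 400)
base = simulate(rain)

# Perturb one rainfall value and compare a full re-simulation with a
# limited-memory update over a window of length L.
t0, L = 100, 40
rain2 = rain.copy()
rain2[t0] *= 1.5
full = simulate(rain2)                 # "full-memory" reference

approx = base.copy()
s = base[t0 - 1] / (1 - a)             # reconstruct the stored state at t0-1
for t in range(t0, t0 + L):
    s = a * s + rain2[t]
    approx[t] = (1 - a) * s            # re-run only the window [t0, t0+L)

err = np.abs(approx - full).max()      # truncation error is O(a**L)
print("max error of limited-memory update:", err)
```

Per proposal, the limited-memory update costs O(L) model steps instead of O(series length), which is where the order-of-magnitude savings come from when L is much shorter than the record.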

    Probabilistic optimization for conceptual rainfall-runoff models: a comparison of the shuffled complex evolution and simulated annealing algorithms

    Automatic optimization algorithms are used routinely to calibrate conceptual rainfall-runoff (CRR) models. The goal of calibration is to estimate a feasible and unique (global) set of parameter estimates that best fit the observed runoff data. Most if not all optimization algorithms have difficulty in locating the global optimum because of response surfaces that contain multiple local optima with regions of attraction of differing size, discontinuities, and long ridges and valleys. Extensive research has been undertaken over the last 10 years to develop efficient and robust global optimization algorithms. This study compares the performance of two probabilistic global optimization methods: the shuffled complex evolution algorithm SCE-UA, and the three-phase simulated annealing algorithm SA-SX. Both algorithms are used to calibrate two parameter sets of a modified version of Boughton's [1984] SFB model using data from two Australian catchments that have low and high runoff yields. For the reduced, well-identified parameter set the algorithms have a similar efficiency for the low-yielding catchment, but SCE-UA is almost twice as robust. Although the robustness of the algorithms is similar for the high-yielding catchment, SCE-UA is six times more efficient than SA-SX. When fitting the full parameter set the performance of SA-SX deteriorated markedly for both catchments. These results indicated that SCE-UA's use of multiple complexes and shuffling provided a more effective search of the parameter space than SA-SX's single simplex with stochastic step acceptance criterion, especially when the level of parameterization is increased. Examination of the response surface for the low-yielding catchment revealed some reasons why SCE-UA outperformed SA-SX and why probabilistic optimization algorithms can experience difficulty in locating the global optimum.
    Mark Thyer, George Kuczera and Bryson C. Bates
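The multimodality problem the abstract describes can be illustrated on a one-dimensional surrogate "response surface": a single local search stops at the nearest optimum, so population or multistart strategies are needed. This is a generic multistart sketch on an assumed test function, not an implementation of SCE-UA or SA-SX.

```python
import numpy as np

# A one-dimensional surface with several local optima, loosely mimicking the
# multimodal response surfaces of CRR calibration (illustrative only).
def f(x):
    return np.sin(3.0 * x) + 0.1 * (x - 2.0) ** 2

def local_search(x, step=0.01, iters=3000):
    # naive fixed-step descent: move only while the objective decreases,
    # so the search is trapped by the first local optimum it reaches
    for _ in range(iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            break
    return x

rng = np.random.default_rng(3)
starts = rng.uniform(-5.0, 10.0, 20)
solutions = np.array([local_search(x) for x in starts])
best = solutions[np.argmin(f(solutions))]

print("distinct local optima found:", np.unique(np.round(solutions, 1)).size)
print("best objective value:", f(best))
```

Different starting points settle into different optima, which is why SCE-UA's multiple complexes with shuffling outperform a single simplex as dimensionality grows.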

    Modeling long-term persistence in hydroclimatic time series using a hidden state Markov model

    A hidden state Markov (HSM) model is developed as a new approach for generating hydroclimatic time series with long-term persistence. The two-state HSM model is motivated by the fact that the interaction of global climatic mechanisms produces alternating wet and dry regimes in Australian hydroclimatic time series. The HSM model provides an explicit mechanism to stochastically simulate these quasi-cyclic wet and dry periods. This is conceptually sounder than the current stochastic models used for hydroclimatic time series simulation. Models such as the lag-one autoregressive (AR(1)) model have no explicit mechanism for simulating the wet and dry regimes. In this study the HSM model was calibrated to four long-term Australian hydroclimatic data sets. A Markov chain Monte Carlo method known as the Gibbs sampler was used for model calibration. The results showed that the locations significantly influenced by tropical weather systems supported the assumptions of the HSM modeling framework and indicated a strong persistence structure. In contrast, the calibration of the AR(1) model to these data sets produced no statistically significant evidence of persistence.
    Mark Thyer and George Kuczera
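The generative side of a two-state HSM model can be sketched in a few lines: a hidden wet/dry state evolves as a Markov chain, and each year's rainfall is drawn from a state-conditional distribution. The transition probabilities and rainfall parameters below are illustrative assumptions, not the values calibrated in the paper (which used a Gibbs sampler for inference, not shown here).

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed two-state transition matrix and state-conditional rainfall parameters.
P = np.array([[0.9, 0.1],     # dry -> dry, dry -> wet
              [0.1, 0.9]])    # wet -> dry, wet -> wet
mean_mm = np.array([500.0, 900.0])   # mean annual rainfall (mm) per state
sd_mm = np.array([100.0, 150.0])

n = 5000
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = rng.choice(2, p=P[states[t - 1]])   # hidden state evolution
rain = rng.normal(mean_mm[states], sd_mm[states])   # state-conditional rainfall

print("fraction of years in the wet state:", (states == 1).mean())
print("mean rainfall dry/wet:", rain[states == 0].mean(), rain[states == 1].mean())
```

The persistent runs of each state produce the quasi-cyclic wet/dry epochs that an AR(1) model, with its single geometric decay of autocorrelation, cannot represent explicitly.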

    Goulburn River experimental catchment data set

    This paper describes the data set from the 6540-km2 Goulburn River experimental catchment in New South Wales, Australia. Data have been archived from this experimental catchment since its inception in September 2002. Land use in the northern half of the catchment is predominantly cropping and grazing on basalt-derived soils, with the south being cattle and sheep grazing on sandstone-derived soils; only the floodplains are cleared of trees in the south. Monitoring sites are mainly concentrated in the nested Merriwa (651 km2) and Krui (562 km2) subcatchments in the northern half of this experimental catchment, with a few monitoring sites located in the south. The data set comprises soil temperature and moisture profile measurements from 26 locations; meteorological data from two automated weather stations (data from a further three stations are available from other sources) including precipitation, atmospheric pressure, air temperature and relative humidity, wind speed and direction, soil heat flux, and up- and down-welling short- and long-wave radiation; streamflow observations at five nested locations (data from a further three locations are available from other sources); a total of three surface soil moisture maps across a 40 km × 50 km region in the north from ~200 measurement locations during intensive field campaigns; and a high-resolution digital elevation model (DEM) of a 175-ha microcatchment in the Krui catchment. These data are available on the World Wide Web at http://www.sasmas.unimelb.edu.au.
    Christoph Rüdiger, Greg Hancock, Herbert M. Hemakumara, Barry Jacobs, Jetse D. Kalma, Cristina Martinez, Mark Thyer, Jeffrey P. Walker, Tony Wells and Garry R. Willgoose

    Diagnosing a distributed hydrologic model for two high-elevation forested catchments based on detailed stand- and basin-scale data

    This study evaluates the performance and internal structure of the distributed hydrology soil vegetation model (DHSVM) using 1998-2001 data collected at Upper Penticton Creek, British Columbia, Canada. It is shown that clear-cut snowmelt rates calculated using data-derived snow albedo curves are in agreement with observed lysimeter outflow. Measurements in a forest stand with 50% crown closure suggest that the fraction of shortwave radiation transmitted through the canopy is 0.18-0.28, while the hemispherical canopy view factor controlling longwave radiation fluxes to the forest snowpack is estimated at 0.81 ± 0.07. DHSVM overestimates shortwave transmittance (0.50) and underestimates the view factor (0.50). An alternative forest radiation balance is formulated that is consistent with the measurements. This new formulation improves model efficiency in simulating streamflow from 0.84 to 0.91 due to greater early season melt that results from the enhanced importance of longwave radiation below the canopy. The model captures differences in canopy rainfall interception between small and large storms, tree transpiration measured over a 6-day summer period, and differences in soil moisture between a dry and a wet summer. While the model was calibrated to 1999 snow water equivalent (SWE) and hydrograph data for the untreated control basin, it successfully simulates forest and clear-cut SWE and streamflow for the 3 other years and 4 years of preharvesting and postharvesting streamflow for the second basin. Comparison of model states with the large array of observations suggests that the modified model provides a reliable tool for assessing forest management impacts in the region.
    Mark Thyer, Jos Beckers, Dave Spittlehouse, Younes Alila and Rita Winkler
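The effect of the transmittance and view-factor parameters quoted in the abstract can be worked through with a simplified sub-canopy radiation balance. The radiation forcing values and the black-body canopy treatment are assumptions made for illustration; only the parameter pairs (tau = 0.23 as a mid-range measured value vs. 0.50 default, view factor 0.81 vs. 0.50) come from the abstract.

```python
# Simplified sub-canopy radiation balance (illustrative, not the study's
# full formulation): shortwave is attenuated by the canopy transmittance tau;
# longwave to the snowpack mixes canopy emission (weighted by the view
# factor) with sky longwave.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def subcanopy_radiation(k_open, l_sky, t_canopy, tau, view):
    k_sub = tau * k_open                                     # transmitted shortwave
    l_sub = view * SIGMA * t_canopy**4 + (1 - view) * l_sky  # longwave to snowpack
    return k_sub, l_sub

# Assumed forcing: 400 W/m^2 shortwave, 250 W/m^2 sky longwave, 270 K canopy.
meas_k, meas_l = subcanopy_radiation(400.0, 250.0, 270.0, tau=0.23, view=0.81)
dflt_k, dflt_l = subcanopy_radiation(400.0, 250.0, 270.0, tau=0.50, view=0.50)

print("measured params: shortwave %.0f, longwave %.0f W/m^2" % (meas_k, meas_l))
print("default params:  shortwave %.0f, longwave %.0f W/m^2" % (dflt_k, dflt_l))
```

Under these assumptions, the measured parameters shift the sub-canopy balance away from shortwave and toward longwave, consistent with the enhanced importance of longwave radiation below the canopy reported in the abstract.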

    Critical evaluation of parameter consistency and predictive uncertainty in hydrological modeling: A case study using Bayesian total error analysis

    The lack of a robust framework for quantifying the parametric and predictive uncertainty of conceptual rainfall-runoff (CRR) models remains a key challenge in hydrology. The Bayesian total error analysis (BATEA) methodology provides a comprehensive framework to hypothesize, infer, and evaluate probability models describing input, output, and model structural error. This paper assesses the ability of BATEA and standard calibration approaches (standard least squares (SLS) and weighted least squares (WLS)) to address two key requirements of uncertainty assessment: (1) reliable quantification of predictive uncertainty and (2) reliable estimation of parameter uncertainty. The case study presents a challenging calibration of the lumped GR4J model to a catchment with ephemeral responses and large rainfall gradients. Postcalibration diagnostics, including checks of predictive distributions using quantile-quantile analysis, suggest that while still far from perfect, BATEA satisfied its assumed probability models better than SLS and WLS. In addition, WLS/SLS parameter estimates were highly dependent on the selected rain gauge and calibration period. This will obscure potential relationships between CRR parameters and catchment attributes and prevent the development of meaningful regional relationships. Conversely, BATEA provided consistent, albeit more uncertain, parameter estimates and thus overcomes one of the obstacles to parameter regionalization. However, significant departures from the calibration assumptions remained even in BATEA, e.g., systematic overestimation of predictive uncertainty, especially in validation. This is likely due to the inferred rainfall errors compensating for simplified treatment of model structural error.
    Mark Thyer, Benjamin Renard, Dmitri Kavetski, George Kuczera, Stewart William Franks and Sri Srikanthan
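The quantile-quantile diagnostic mentioned in this abstract can be sketched with the probability integral transform (PIT): each observation is mapped to its quantile within the predictive ensemble, and for reliable predictions those quantiles should be uniformly distributed. The Gaussian ensembles below are synthetic stand-ins, not BATEA output; the "overdispersed" case mimics the systematic overestimation of predictive uncertainty the paper reports.

```python
import numpy as np

rng = np.random.default_rng(11)

n_obs, n_ens = 500, 1000
truth = rng.normal(0.0, 1.0, n_obs)           # synthetic "observations"

ensembles = {
    "reliable": rng.normal(0.0, 1.0, (n_obs, n_ens)),
    "overdispersed": rng.normal(0.0, 2.0, (n_obs, n_ens)),  # spread overestimated
}

def pit(obs, ens):
    # probability integral transform: quantile of each obs in its ensemble
    return (ens < obs[:, None]).mean(axis=1)

dev = {}
for name, ens in ensembles.items():
    p = np.sort(pit(truth, ens))
    uniform = (np.arange(1, n_obs + 1) - 0.5) / n_obs
    dev[name] = np.abs(p - uniform).max()     # max departure from the 1:1 line
    print(name, "max departure from uniformity:", round(dev[name], 3))
```

In a quantile-quantile plot, overestimated predictive uncertainty shows up as PIT values bunched around 0.5, i.e. a curve that departs from the 1:1 line, which is the kind of departure the postcalibration diagnostics detect.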